
    Optimal testing of equivalence hypotheses

    In this paper we consider the construction of optimal tests of equivalence hypotheses. Specifically, assume X_1, ..., X_n are i.i.d. with distribution P_\theta, with \theta \in R^k. Let g(\theta) be some real-valued parameter of interest. The null hypothesis asserts g(\theta) \notin (a,b) versus the alternative g(\theta) \in (a,b). For example, such hypotheses occur in bioequivalence studies where one may wish to show two drugs, a brand name and a proposed generic version, have the same therapeutic effect. Little optimal theory is available for such testing problems, and it is the purpose of this paper to provide an asymptotic optimality theory. Thus, we provide asymptotic upper bounds for what is achievable, as well as asymptotically uniformly most powerful test constructions that attain the bounds. The asymptotic theory is based on Le Cam's notion of asymptotically normal experiments. In order to approximate a general problem by a limiting normal problem, a UMP equivalence test is obtained for testing the mean of a multivariate normal distribution. (Published in the Annals of Statistics, http://dx.doi.org/10.1214/009053605000000048.)
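
    The setup above, a null hypothesis g(\theta) \notin (a,b) tested against the alternative g(\theta) \in (a,b), is the abstract form of the familiar bioequivalence problem. As a concrete point of reference, the sketch below implements the standard two one-sided tests (TOST) for a normal mean, not the paper's asymptotically optimal construction; the data, equivalence margins and significance level are hypothetical.

```python
import numpy as np
from scipy import stats

def tost_equivalence(x, a, b, alpha=0.05):
    """Two one-sided tests (TOST) for H0: mean <= a or mean >= b
    versus H1: a < mean < b. Illustrative only; not the optimal
    asymptotic construction of the paper."""
    n = len(x)
    xbar, se = np.mean(x), np.std(x, ddof=1) / np.sqrt(n)
    # One-sided t statistics against each boundary of the equivalence interval.
    t_lower = (xbar - a) / se          # should be large and positive
    t_upper = (xbar - b) / se          # should be large and negative
    p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
    p_upper = stats.t.cdf(t_upper, df=n - 1)
    # Reject the non-equivalence null only if both one-sided tests reject,
    # so the reported p-value is the maximum of the two one-sided p-values.
    p_value = max(p_lower, p_upper)
    return p_value, p_value <= alpha

# Hypothetical data: differences in response between generic and brand drug.
rng = np.random.default_rng(0)
diff = rng.normal(loc=0.1, scale=1.0, size=50)
print(tost_equivalence(diff, a=-0.5, b=0.5))
```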

    Explicit nonparametric confidence intervals for the variance with guaranteed coverage

    In this paper, we provide a method for constructing confidence intervals for the variance that exhibit guaranteed coverage probability for any sample size, uniformly over a wide class of probability distributions. In contrast, standard methods achieve guaranteed coverage only in the limit for a fixed distribution or for any sample size over a very restrictive (parametric) class of probability distributions. Of course, it is impossible to construct effective confidence intervals for the variance without some restriction, due to a result of Bahadur and Savage (1956). However, it is possible if the observations lie in a fixed compact set. We also consider the case of lower confidence bounds without any support restriction. Our method is based on the behavior of the variance over distributions that lie within a Kolmogorov-Smirnov confidence band for the underlying distribution. The method is a generalization of an idea of Anderson (1967), who considered only the case of the mean; it applies to very general parameters, and particularly the variance. While typically it is not clear how to compute these intervals explicitly, for the special case of the variance we provide an algorithm to do so. Asymptotically, the length of the intervals is of order n^{-1/2} (in probability), so that, while providing guaranteed coverage, they are not overly conservative. A small simulation study examines the finite sample behavior of the proposed intervals.
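
    The building block of the method, a uniform confidence band for the underlying CDF, is easy to write down; a minimal sketch follows, using the Dvoretzky-Kiefer-Wolfowitz inequality as a conservative stand-in for exact Kolmogorov-Smirnov quantiles. The optimization of the variance functional over distributions inside the band, which is the paper's actual contribution, is only indicated in a comment.

```python
import numpy as np

def ks_confidence_band(x, alpha=0.05):
    """Nonparametric confidence band for the CDF F, based on the
    Dvoretzky-Kiefer-Wolfowitz inequality (a conservative stand-in for
    exact Kolmogorov-Smirnov quantiles). Returns the sorted sample
    together with lower and upper band values at those points."""
    n = len(x)
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))   # DKW half-width
    xs = np.sort(x)
    ecdf = np.arange(1, n + 1) / n
    lower = np.clip(ecdf - eps, 0.0, 1.0)
    upper = np.clip(ecdf + eps, 0.0, 1.0)
    return xs, lower, upper

# A guaranteed-coverage interval for the variance is then obtained by
# minimizing and maximizing the variance functional over all distributions
# (supported on the known compact set) whose CDF stays inside this band;
# the paper supplies the algorithm for that optimization step.
rng = np.random.default_rng(1)
xs, lo, hi = ks_confidence_band(rng.uniform(size=100))
```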

    Generalizations of the Familywise Error Rate

    Consider the problem of simultaneously testing null hypotheses H_1,...,H_s. The usual approach to dealing with the multiplicity problem is to restrict attention to procedures that control the familywise error rate (FWER), the probability of even one false rejection. In many applications, particularly if s is large, one might be willing to tolerate more than one false rejection provided the number of such cases is controlled, thereby increasing the ability of the procedure to detect false null hypotheses. This suggests replacing control of the FWER by controlling the probability of k or more false rejections, which we call the k-FWER. We derive both single-step and stepdown procedures that control the k-FWER, without making any assumptions concerning the dependence structure of the p-values of the individual tests. In particular, we derive a stepdown procedure that is quite simple to apply, and prove that it cannot be improved without violation of control of the k-FWER. We also consider the false discovery proportion (FDP), defined by the number of false rejections divided by the total number of rejections (defined to be 0 if there are no rejections). The false discovery rate proposed by Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300] controls E(FDP). Here, we construct methods such that, for any \gamma and \alpha, P{FDP > \gamma} \le \alpha. Two stepdown methods are proposed. The first holds under mild conditions on the dependence structure of p-values, while the second is more conservative but holds without any dependence assumptions. (Published in the Annals of Statistics, http://dx.doi.org/10.1214/009053605000000084.)
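
    To make the stepdown idea concrete, the sketch below applies a generic stepdown rule with the critical constants \alpha_i = k\alpha/s for i \le k and k\alpha/(s+k-i) for i > k, which are the constants commonly quoted for k-FWER control in this setting; treat the specific constants as an assumption of the sketch rather than a verbatim statement of the paper's procedure. With k = 1 the rule reduces to Holm's familiar FWER-controlling procedure.

```python
import numpy as np

def stepdown_kfwer(pvals, k=1, alpha=0.05):
    """Generic stepdown multiple-testing procedure.

    Uses the critical constants alpha_i = k*alpha/s for i <= k and
    k*alpha/(s + k - i) for i > k (assumed here for illustration).
    Returns a boolean rejection indicator in the original order of
    the hypotheses."""
    pvals = np.asarray(pvals, dtype=float)
    s = len(pvals)
    i = np.arange(1, s + 1)
    crit = np.where(i <= k, k * alpha / s, k * alpha / (s + k - i))
    order = np.argsort(pvals)
    # Stepdown: reject H_(1),...,H_(r), where r is the largest index such
    # that p_(j) <= alpha_j for every j <= r.
    passing = pvals[order] <= crit
    r = s if passing.all() else int(np.argmin(passing))  # index of first failure
    reject = np.zeros(s, dtype=bool)
    reject[order[:r]] = True
    return reject

print(stepdown_kfwer([0.001, 0.004, 0.02, 0.3, 0.8], k=2, alpha=0.05))
```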

    On stepdown control of the false discovery proportion

    Consider the problem of testing multiple null hypotheses. A classical approach to dealing with the multiplicity problem is to restrict attention to procedures that control the familywise error rate (FWER), the probability of even one false rejection. However, if s is large, control of the FWER is so stringent that the ability of a procedure which controls the FWER to detect false null hypotheses is limited. Consequently, it is desirable to consider other measures of error control. We will consider methods based on control of the false discovery proportion (FDP), defined by the number of false rejections divided by the total number of rejections (defined to be 0 if there are no rejections). The false discovery rate proposed by Benjamini and Hochberg (1995) controls E(FDP). Here, we construct methods such that, for any \gamma and \alpha, P{FDP > \gamma} \le \alpha. Based on p-values of individual tests, we consider stepdown procedures that control the FDP, without imposing dependence assumptions on the joint distribution of the p-values. A greatly improved version of a method given in Lehmann and Romano (2005) is derived and generalized to provide a means by which any sequence of nondecreasing constants can be rescaled to ensure control of the FDP. We also provide a stepdown procedure that controls the FDR under a dependence assumption. (Published in the IMS Lecture Notes--Monograph Series, http://dx.doi.org/10.1214/074921706000000383.)
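
    A stepdown rule targeting P{FDP > \gamma} \le \alpha has the same mechanical form as a k-FWER stepdown procedure; the sketch below uses critical constants \alpha_i = (\lfloor \gamma i \rfloor + 1)\alpha/(s + \lfloor \gamma i \rfloor + 1 - i), which are assumed here purely for illustration. The dependence-free guarantee described in the abstract requires rescaling such a nondecreasing sequence by a further constant, which the sketch omits.

```python
import numpy as np

def stepdown_fdp(pvals, gamma=0.1, alpha=0.05):
    """Stepdown procedure aimed at P(FDP > gamma) <= alpha.

    The critical constants alpha_i = (floor(gamma*i) + 1) * alpha /
    (s + floor(gamma*i) + 1 - i) are assumed for illustration; the
    dependence-free version described in the abstract rescales such a
    nondecreasing sequence by a further constant, omitted here."""
    pvals = np.asarray(pvals, dtype=float)
    s = len(pvals)
    i = np.arange(1, s + 1)
    m = np.floor(gamma * i) + 1
    crit = m * alpha / (s + m - i)
    order = np.argsort(pvals)
    passing = pvals[order] <= crit
    r = s if passing.all() else int(np.argmin(passing))
    reject = np.zeros(s, dtype=bool)
    reject[order[:r]] = True
    return reject

print(stepdown_fdp([0.001, 0.004, 0.02, 0.3, 0.8], gamma=0.1, alpha=0.05))
```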

    Stepup procedures for control of generalizations of the familywise error rate

    Consider the multiple testing problem of testing null hypotheses H_1,...,H_s. A classical approach to dealing with the multiplicity problem is to restrict attention to procedures that control the familywise error rate (FWER), the probability of even one false rejection. But if s is large, control of the FWER is so stringent that the ability of a procedure that controls the FWER to detect false null hypotheses is limited. It is therefore desirable to consider other measures of error control. This article considers two generalizations of the FWER. The first is the k-FWER, in which one is willing to tolerate k or more false rejections for some fixed k \ge 1. The second is based on the false discovery proportion (FDP), defined to be the number of false rejections divided by the total number of rejections (and defined to be 0 if there are no rejections). Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300] proposed control of the false discovery rate (FDR), by which they meant that, for fixed \alpha, E(FDP) \le \alpha. Here, we consider control of the FDP in the sense that, for fixed \gamma and \alpha, P{FDP > \gamma} \le \alpha. Beginning with any nondecreasing sequence of constants and p-values for the individual tests, we derive stepup procedures that control each of these two measures of error control without imposing any assumptions on the dependence structure of the p-values. We use our results to point out a few interesting connections with some closely related stepdown procedures. We then compare and contrast two FDP-controlling procedures obtained using our results with the stepup procedure for control of the FDR of Benjamini and Yekutieli [Ann. Statist. 29 (2001) 1165-1188]. (Published in the Annals of Statistics, http://dx.doi.org/10.1214/009053606000000461.)
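
    A stepup procedure runs in the opposite direction from a stepdown one: it finds the largest index at which an ordered p-value falls below its critical constant and rejects everything up to that index. The sketch below shows the generic stepup mechanism together with the Benjamini-Yekutieli FDR-controlling constants mentioned above as the comparison point; the paper's own k-FWER- and FDP-controlling stepup constants are not reproduced here.

```python
import numpy as np

def stepup(pvals, crit):
    """Generic stepup procedure: reject the hypotheses with the r smallest
    p-values, where r is the largest index with p_(r) <= crit[r]."""
    pvals = np.asarray(pvals, dtype=float)
    order = np.argsort(pvals)
    passing = np.nonzero(pvals[order] <= crit)[0]
    r = passing[-1] + 1 if passing.size else 0
    reject = np.zeros(len(pvals), dtype=bool)
    reject[order[:r]] = True
    return reject

def benjamini_yekutieli(pvals, alpha=0.05):
    """FDR-controlling stepup of Benjamini and Yekutieli (2001), valid
    under arbitrary dependence: the Benjamini-Hochberg constants
    i*alpha/s divided by the harmonic sum 1 + 1/2 + ... + 1/s."""
    s = len(pvals)
    i = np.arange(1, s + 1)
    crit = i * alpha / (s * np.sum(1.0 / i))
    return stepup(pvals, crit)

print(benjamini_yekutieli([0.0005, 0.003, 0.02, 0.04, 0.2, 0.6]))
```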

    On the uniform asymptotic validity of subsampling and the bootstrap

    This paper provides conditions under which subsampling and the bootstrap can be used to construct estimators of the quantiles of the distribution of a root that behave well uniformly over a large class of distributions P. These results are then applied (i) to construct confidence regions that behave well uniformly over P in the sense that the coverage probability tends to at least the nominal level uniformly over P and (ii) to construct tests that behave well uniformly over P in the sense that the size tends to no greater than the nominal level uniformly over P. Without these stronger notions of convergence, the asymptotic approximations to the coverage probability or size may be poor, even in very large samples. Specific applications include the multivariate mean, testing moment inequalities, multiple testing, the empirical process and U-statistics. (Published in the Annals of Statistics, http://dx.doi.org/10.1214/12-AOS1051.)
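
    The generic recipe being analyzed, estimating the quantiles of a root by recomputing it on subsamples and then inverting, can be sketched for the simplest case of a univariate mean; the uniformity statements of the paper concern when this recipe is valid, not how it is coded.

```python
import numpy as np

def subsampling_ci(x, b, alpha=0.05, n_sub=2000, rng=None):
    """Subsampling confidence interval for the mean.

    The root is R_n = sqrt(n) * (mean_n - mu); its distribution is
    approximated by sqrt(b) * (mean of a size-b subsample - mean_n)
    over random subsamples drawn without replacement. A minimal sketch
    of the generic recipe, not the paper's uniformity analysis."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    n, theta_hat = len(x), np.mean(x)
    roots = np.empty(n_sub)
    for j in range(n_sub):
        sub = rng.choice(x, size=b, replace=False)
        roots[j] = np.sqrt(b) * (np.mean(sub) - theta_hat)
    lo, hi = np.quantile(roots, [alpha / 2, 1 - alpha / 2])
    # Invert the root: mu in [theta_hat - hi/sqrt(n), theta_hat - lo/sqrt(n)].
    return theta_hat - hi / np.sqrt(n), theta_hat - lo / np.sqrt(n)

rng = np.random.default_rng(2)
print(subsampling_ci(rng.exponential(size=500), b=50, rng=rng))
```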
